SHI Lei1, ZHANG Xiaohan1, HONG Xiaopeng1,2, LI Jiliang1, DING Wenjie3, SHEN Chao1
1. School of Cyber Science and Engineering, Xi'an Jiaotong University, Xi'an 710049; 2. Faculty of Computing, Harbin Institute of Technology, Harbin 150001; 3. Beijing Megvii Technology Co., Ltd., Beijing 100080
Abstract: Traditional adversarial attack methods for person re-identification (ReID) have limitations, such as dependence on the gallery set to generate adversarial examples and an overly restricted example-generation scheme. To address these problems, an efficient ReID adversarial attack model, the multi-scale gradient adversarial example generation network (MSG-AEGN), is proposed. MSG-AEGN builds on multi-scale gradient adversarial networks: a multi-scale network structure is adopted to obtain different semantic levels of the input images and of the generator's intermediate features, and an attention module converts the generator's intermediate features into multi-scale weights that modulate the image pixels. The network thereby outputs high-quality adversarial examples that confuse ReID models. On this basis, an improved adversarial loss function based on the average distance of image features and the triplet loss is proposed to constrain and guide the training of MSG-AEGN. Experiments on three pedestrian ReID datasets, namely Market1501, CUHK03 and DukeMTMC-ReID, show that the proposed method achieves promising attack performance against both mainstream ReID models based on deep convolutional neural networks and transformer-based models. Moreover, MSG-AEGN requires low attack energy and yields high structural similarity between the adversarial examples and the original images.
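A minimal PyTorch sketch of the two mechanisms the abstract names: an attention head that converts a generator feature map into per-pixel weights modulating the image, and an adversarial objective combining an average feature-distance term with a reversed triplet term. All module and function names, tensor shapes, and the exact loss composition are assumptions for illustration; the paper's actual formulation may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentionModulation(nn.Module):
    # Hypothetical attention head: maps a generator feature map at one
    # scale to per-pixel weights in (0, 1) that modulate the input image.
    def __init__(self, in_channels):
        super().__init__()
        self.to_weight = nn.Sequential(
            nn.Conv2d(in_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, image, feat):
        # Upsample the weight map to image resolution, then modulate pixels.
        w = self.to_weight(feat)
        w = F.interpolate(w, size=image.shape[-2:], mode="bilinear",
                          align_corners=False)
        return image * w

def adversarial_reid_loss(feat_adv, feat_orig, feat_pos, feat_neg, margin=0.3):
    # Hypothetical adversarial objective over (B, D) feature batches:
    # push adversarial features away from the clean originals (average
    # feature distance, negated so minimizing the loss maximizes it) and
    # invert the ranking a ReID model relies on (reversed triplet: closer
    # to different-identity negatives than to same-identity positives).
    dist_term = -F.pairwise_distance(feat_adv, feat_orig).mean()
    d_ap = F.pairwise_distance(feat_adv, feat_pos)
    d_an = F.pairwise_distance(feat_adv, feat_neg)
    triplet_term = F.relu(d_an - d_ap + margin).mean()
    return dist_term + triplet_term

In the full model, one such attention head would presumably be attached at each generator scale, and the loss features would be extracted by a surrogate ReID backbone; both choices are assumptions here, as the abstract does not specify them.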